    Applications of the reflective function of the European Language Portfolio

    In the 2004-05 academic year, the Language Centre of the University of Trieste launched an experiment on the European Language Portfolio, using the AICLU Italian translation of the CercleS version for university students. This article illustrates some theoretical premises of self-assessment and reflection, the initial phases of the experiment carried out with a group of students from the Faculty of Education Sciences, and finally the translation problems encountered in transposing the European Language Portfolio (ELP) from English into Italian.

    The Role of Vision on Spatial Competence

    Several lines of evidence indicate that visual experience during development is fundamental to the acquisition of long-term spatial capabilities. For instance, reaching abilities tend to emerge at 5 months of age in sighted infants, but only at 10 months of age in blind infants. Moreover, other spatial skills such as auditory localization and haptic orientation discrimination tend to be delayed or impaired in visually impaired children, with a substantial impact on the development of sighted-like perceptual and cognitive abilities. Here we provide an overview of studies showing that the lack of vision can interfere with the development of coherent multisensory spatial representations, and we highlight the contribution of current research to the design of new tools that support the acquisition of spatial capabilities during childhood.

    Multisensory interactive technologies for primary education: From science to technology

    While technology is increasingly used in the classroom, its acceptance by teachers and students has proven more difficult than expected. In this work we focus on multisensory technologies, and we argue that the intersection between current challenges in pedagogical practices and recent scientific evidence opens novel opportunities for these technologies to bring significant benefits to the learning process. In our view, multisensory technologies are ideal for supporting an embodied and enactive pedagogical approach that exploits the sensory modality best suited to teaching a given concept at school. This represents a great opportunity for designing technologies that are both grounded in robust scientific evidence and tailored to the actual needs of teachers and students. Based on our experience in technology-enhanced learning projects, we propose six golden rules we deem important for seizing this opportunity and fully exploiting it.

    Young children do not integrate visual and haptic information

    Several studies have shown that adults integrate visual and haptic information (and information from other modalities) in a statistically optimal fashion, weighting each sense according to its reliability. To date, no studies have investigated when this capacity for cross-modal integration develops. Here we show that, prior to eight years of age, integration of visual and haptic spatial information is far from optimal, with either vision or touch dominating totally, even in conditions where the dominant sense is far less precise than the other (as assessed by discrimination thresholds). For size discrimination, haptic information dominates in determining both perceived size and discrimination thresholds, while for orientation discrimination vision dominates. By eight to ten years of age, integration becomes statistically optimal, as in adults. We suggest that during development perceptual systems require constant recalibration, for which cross-sensory comparison is important; using one sense to calibrate the other precludes useful combination of the two sources.
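
    For reference, the "statistically optimal" rule these studies test against is the standard maximum-likelihood (reliability-weighted) model of cue combination. A minimal sketch in generic notation not taken from the paper (\hat{S} for estimates, \sigma^2 for sensory noise variance, V and H for vision and haptics):

    \hat{S}_{VH} = w_V \hat{S}_V + w_H \hat{S}_H, \qquad w_V = \frac{\sigma_H^2}{\sigma_V^2 + \sigma_H^2}, \quad w_H = \frac{\sigma_V^2}{\sigma_V^2 + \sigma_H^2}

    \sigma_{VH}^2 = \frac{\sigma_V^2 \, \sigma_H^2}{\sigma_V^2 + \sigma_H^2} \le \min(\sigma_V^2, \sigma_H^2)

    On this account, the total dominance reported for young children corresponds to one weight being pinned near 1 regardless of the relative reliabilities, rather than the weights tracking reliability as the optimal model requires.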

    Task-dependent calibration of auditory spatial perception through environmental visual observation

    Visual information is paramount to space perception, and vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information sufficient to influence auditory perception is still unknown. It is therefore interesting to ask whether vision can improve auditory precision through a short period of environmental observation preceding an auditory task, and whether this influence is task-specific, environment-specific, or both. To address these questions, we investigated possible improvements in acoustic precision with sighted, blindfolded participants in two auditory tasks (minimum audible angle and space bisection) and two acoustically different environments (a normal room and an anechoic room). With respect to a baseline of auditory precision, we found an improvement in the space bisection task, but not in the minimum audible angle task, after observation of the normal room. No improvement was found when performing the same task in the anechoic chamber. In addition, no difference was found between a condition of short environmental observation and a condition of full vision throughout the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes may be the cue that underpins visual calibration, mediating the transfer of information from the visual to the auditory system.

    Cross-modal facilitation of visual and tactile motion

    Robust and versatile perception of the world is augmented considerably when information from our five separate sensory systems is combined. Much recent evidence has demonstrated near-optimal integration across the senses, but it remains unclear at what level this integration occurs: at a "sensory" or a "decisional" level. Here we show that non-informative "pedestal" motion stimuli in one sensory modality (vision or touch) selectively lower thresholds in the other, to the same degree as pedestals in the same modality: strong evidence for functionally important cross-sensory integration at early levels of sensory processing.

    Visual representations of time elicit early responses in human temporal cortex

    Time perception is inherently part of human life. All human sensory modalities are involved in the complex task of creating a temporal representation of the external world; however, when representing time, people primarily rely on auditory information. Since the auditory system prevails in many audio-visual temporal tasks, one may expect that early recruitment of the auditory network is necessary for building a highly resolved and flexible temporal representation in the visual modality. To test this hypothesis, we asked 17 healthy participants to temporally bisect three consecutive flashes while we recorded EEG. We demonstrated that visual stimuli during temporal bisection elicit an early (50–90 ms) response in an extended area of the temporal cortex, likely including the auditory cortex. The same activation did not appear during an easier spatial bisection task. These findings suggest that the brain may use auditory representations to deal with complex temporal representations in the visual system.

    Cross-Sensory Facilitation Reveals Neural Interactions between Visual and Tactile Motion in Humans

    Many recent studies show that the human brain integrates information across the different senses and that stimuli in one sensory modality can enhance the perception of other modalities. Here we study the processes that mediate cross-modal facilitation and summation between visual and tactile motion. We found that while summation produced a generic, non-specific improvement of thresholds, probably reflecting higher-order interaction of decision signals, facilitation revealed a strong, direction-specific interaction, which we believe reflects sensory interactions. We measured visual and tactile velocity discrimination thresholds over a wide range of base velocities and conditions. Thresholds for both visual and tactile stimuli showed the characteristic “dipper function,” with minimum thresholds occurring at a given “pedestal speed.” When visual and tactile coherent stimuli were combined (summation condition), the thresholds for these multisensory stimuli also showed a “dipper function,” with minimum thresholds occurring in a similar range to that for unisensory signals. However, the improvement of multisensory thresholds was weak and not directionally specific, and it was well predicted by the maximum-likelihood estimation model (in agreement with previous research). A different technique (facilitation) did, however, reveal direction-specific enhancement: adding a non-informative “pedestal” motion stimulus in one sensory modality (vision or touch) selectively lowered thresholds in the other, by the same amount as pedestals in the same modality. Facilitation did not occur for neutral stimuli such as sounds (which would also have reduced temporal uncertainty), nor for motion in the opposite direction, even in blocked trials where subjects knew that the motion was in the opposite direction, showing that the facilitation was not under subject control. Cross-sensory facilitation is strong evidence for functionally relevant cross-sensory integration at early levels of sensory processing.
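
    As a point of reference (standard psychophysics, not stated in the abstract itself): discrimination thresholds scale with the noise standard deviation, so the maximum-likelihood model predicts combined visuo-tactile thresholds of

    T_{VT} = \frac{T_V \, T_T}{\sqrt{T_V^2 + T_T^2}}

    which is at most a factor of \sqrt{2} (about 1.4) below equal unisensory thresholds. This modest ceiling is consistent with the weak summation benefit reported above, whereas the direction-specific facilitation, as strong as a same-modality pedestal, points to sensory-level interactions beyond this decision-level prediction.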

    Audio Cortical Processing in Blind Individuals

    This chapter focuses on the cortical processing of auditory spatial information in blindness. Research has demonstrated enhanced auditory processing in blind individuals, suggesting that they compensate for the lack of vision with greater sensitivity in the other senses. A few years ago, we demonstrated severely impaired auditory precision in congenitally blind individuals performing an auditory spatial metric task: participants’ thresholds for spatially bisecting three consecutive, spatially distributed sound sources were seriously compromised. Here we describe the psychophysical and neural correlates of this deficit, and we show that the deficit disappears if blind individuals are presented with coherent spatio-temporal cues (short space associated with short time, and vice versa). When the audio information instead presents incoherent spatio-temporal cues (short space associated with long time, and vice versa), sighted individuals are unaffected by the perturbation, while blind individuals are strongly attracted to the temporal cue. These results suggest that blind participants use temporal cues to make auditory spatial estimates and that the visual cortex has a functional role in these perceptual tasks. In the present chapter we illustrate our hypothesis, suggesting that the lack of vision may drive the construction of a multisensory cortical network that codes space based on temporal rather than spatial coordinates.